Recurrent Neural Network with Soft 'Winner Takes All' Principle for the TSP
Authors
Abstract
This paper shows the application of Wang's Recurrent Neural Network with the 'Winner Takes All' (WTA) principle, in a soft version, to solve the Traveling Salesman Problem. In the soft WTA principle, the winner neuron is updated at each iteration with part of the value of each competing neuron. Comparisons with the hard WTA are made using instances from the TSPLIB (Traveling Salesman Problem Library). The results show that the soft WTA yields equal or better results than the hard WTA in most of the problems tested.
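As a rough illustration (not the authors' exact update equations, which the abstract does not give), the contrast between a hard WTA step and a soft WTA step over a vector of neuron activations can be sketched as follows; the transfer fraction `alpha` is an assumed parameter for this sketch:

```python
import numpy as np

def hard_wta(activations):
    """Hard WTA: the winner takes everything; competitors are zeroed."""
    out = np.zeros_like(activations, dtype=float)
    out[np.argmax(activations)] = 1.0
    return out

def soft_wta(activations, alpha=0.1):
    """Soft WTA (illustrative sketch): the winner absorbs a fraction
    `alpha` of each competing neuron's activation, rather than
    suppressing the competitors entirely."""
    out = np.asarray(activations, dtype=float).copy()
    w = np.argmax(out)
    losers = np.arange(out.size) != w
    transfer = alpha * out[losers].sum()   # part of every competing neuron
    out[losers] *= (1.0 - alpha)           # competitors give up a share
    out[w] += transfer                     # winner collects the shares
    return out

a = np.array([0.2, 0.5, 0.3])
print(hard_wta(a))        # one-hot at the winner
print(soft_wta(a, 0.1))   # winner grows, competitors shrink, total preserved
```

Because the soft step only redistributes activation instead of discarding it, losing neurons retain information across iterations, which is the property the paper credits for the equal-or-better tour quality.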
Related papers
Recurrent Neural Networks with the Soft ‘Winner Takes All’ Principle Applied to the Traveling Salesman Problem
This work shows the application of Wang's Recurrent Neural Network with the 'Winner Takes All' (WTA) principle to solve the classic Operations Research problem called the Traveling Salesman Problem. The upgraded version of the 'Winner Takes All' principle proposed in this work is called soft, because the winning neuron is updated with only part of the activation values of the other competing n...
A Recurrent Neural Network to Traveling Salesman Problem
A technique that uses Wang's Recurrent Neural Network with the 'Winner Takes All' principle is presented to solve two classical problems of combinatorial optimization: the Assignment Problem (AP) and the Traveling Salesman Problem (TSP). With a set of appropriate choices for the parameters in Wang's Recurrent Neural Network, this technique appears to be efficient in solving the mentioned problems in...
A new heuristic procedure to solve the Traveling Salesman problem
This paper presents a heuristic technique that uses Wang's Recurrent Neural Network with the 'Winner Takes All' principle to solve the Traveling Salesman Problem. When the Wang Neural Network presents solutions for the Assignment Problem with all constraints satisfied, the 'Winner Takes All' principle is applied to the values of the Neural Network's decision variables, with the additional con...
Competitive Winner-Takes-All Clustering in the Domain of Graphs
We present a theoretical foundation for competitive learning in the domain of graphs within a connectionist framework. In the first part of this contribution, we embed graphs in a Euclidean space to facilitate competitive learning in the domain of graphs. We adopt constitutive concepts of competitive learning, such as scalar products, metrics, and the weighted mean, for graphs. The first part is...
Fast computation with spikes in a recurrent neural network
Neural networks with recurrent connections are sometimes regarded as too slow at computation to serve as models of the brain. Here we analytically study a counterexample: a network consisting of N integrate-and-fire neurons with self-excitation, all-to-all inhibition, instantaneous synaptic coupling, and constant external driving inputs. When the inhibition and/or excitation are large enough, t...
Journal:
Volume, Issue:
Pages: -
Publication date: 2010